-
Medical cyber-physical systems (MCPS) are increasingly adopting learning-enabled components (LECs) to enhance their decision-making capabilities. Due to the safety-critical nature of MCPS, these systems must maintain high performance on both expected and unexpected input data. Therefore, ensuring the robustness of LE-MCPS is crucial for their successful deployment. Existing research predominantly focuses on robustness to synthetic adversarial examples, crafted by adding imperceptible perturbations to clean input data. However, these synthetic adversarial examples do not accurately reflect the most challenging real-world scenarios, especially in the context of healthcare data. Consequently, robustness to synthetic adversarial examples may not necessarily translate to robustness against naturally occurring adversarial examples. We propose a method to evaluate the robustness of LE-MCPS to natural adversarial examples. The method curates naturally adversarial datasets by leveraging probabilistic labels obtained from automated weakly supervised labeling, which combines noisy and cheap-to-obtain labeling heuristics. Based on these labels, the method adversarially orders the input data and uses this ordering to construct a sequence of increasingly adversarial datasets for assessing robustness. Our evaluation on six MCPS case studies and two non-medical case studies demonstrates (1) the efficacy and statistical validity of our approach to generating naturally adversarial datasets and (2) the utility of our robustness evaluation in classifying robust and non-robust LE-MCPS.
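The ordering step described above can be sketched as follows. This is a minimal illustration, assuming the probabilistic labels arrive as a matrix of per-class probabilities from a weak-supervision label model; the function names, the "confidence in the true class" criterion, and the choice of nested fractions are hypothetical, not taken from the paper.

```python
import numpy as np

def adversarial_ordering(prob_labels, true_classes):
    """Order sample indices from most to least adversarial.

    Hypothetical criterion: a sample is more adversarial when the
    probabilistic label assigns less mass to its true class.
    prob_labels: (N, C) array of per-class probabilities.
    true_classes: (N,) array of integer class indices.
    """
    confidence = prob_labels[np.arange(len(true_classes)), true_classes]
    return np.argsort(confidence)  # ascending confidence: hardest first

def nested_adversarial_datasets(order, fractions=(0.1, 0.25, 0.5, 1.0)):
    """Build a sequence of nested datasets from the adversarial ordering.

    Smaller fractions keep only the hardest samples, yielding
    increasingly adversarial evaluation sets as fractions shrink.
    """
    n = len(order)
    return [order[: max(1, int(f * n))] for f in fractions]
```

A robustness curve could then be traced by evaluating the model on each nested subset in turn.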
-
Recent advancements in person recognition have raised concerns about identity privacy leaks. Gait recognition through millimeter-wave radar provides a privacy-centric alternative; however, it suffers from lower accuracy due to the sparse data these sensors capture. We are the first to investigate a cross-modal method, IdentityKD, to enhance gait-based person recognition with the assistance of facial data. IdentityKD trains on both gait and facial data, while inference is conducted exclusively with gait data. To effectively transfer facial knowledge to the gait model, we create a composite feature representation using contrastive learning. This method integrates facial and gait features into a unified embedding that captures the unique identity-specific information from both modalities. We employ two distinct contrastive learning losses: one minimizes the distance between embeddings of data pairs from the same person, enhancing intra-class compactness, while the other maximizes the distance between embeddings of data pairs from different individuals, improving inter-class separability. Additionally, we use an identity-wise distillation strategy, which tailors the training process for each individual, ensuring that the model learns to distinguish between different identities more effectively. Our experiments on a dataset of 36 subjects, each providing over 5000 face-gait pairs, demonstrate that IdentityKD improves identity recognition accuracy by 6.5% compared to baseline methods.
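The two contrastive terms described above can be sketched as a pairwise margin loss. This is a minimal NumPy illustration under assumed shapes (paired embeddings plus a same-identity indicator); the margin value and function name are hypothetical, and the paper's actual losses may be formulated differently over batches.

```python
import numpy as np

def pair_contrastive_losses(emb_a, emb_b, same_identity, margin=1.0):
    """Compute the two contrastive terms on embedding pairs.

    emb_a, emb_b: (N, D) paired embeddings (e.g., fused face-gait
    features vs. gait-only features; hypothetical pairing).
    same_identity: (N,) boolean, True if the pair shares an identity.
    """
    dist = np.linalg.norm(emb_a - emb_b, axis=1)
    # Pull term: same-identity pairs should be close (intra-class compactness).
    pull = np.where(same_identity, dist ** 2, 0.0)
    # Push term: different identities should be at least `margin` apart
    # (inter-class separability); no penalty once they exceed the margin.
    push = np.where(~same_identity, np.maximum(0.0, margin - dist) ** 2, 0.0)
    return pull.mean(), push.mean()
```

In training, the two terms would typically be summed (possibly with weights) and minimized jointly.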
-
Incorporating learning-based components in current state-of-the-art cyber-physical systems (CPS) has been a challenge due to the brittleness of the underlying deep neural networks. On the bright side, if executed correctly with safety guarantees, this has the potential to revolutionize domains like autonomous systems, medicine, and other safety-critical areas, because it would allow system designers to use high-dimensional outputs from sensors like cameras and LiDAR. The trepidation in deploying systems with vision and LiDAR components comes from incidents of catastrophic failures in the real world: recent reports of self-driving cars running into difficult-to-handle scenarios trace back to the software components that process such sensor inputs. The ability to handle these high-dimensional signals comes from the explosion of algorithms built on deep neural networks. Unfortunately, the safety issues also stem from deep neural networks themselves: pitfalls arise from possible over-fitting and a lack of awareness about the blind spots induced by the training distribution. Ideally, system designers would wish to cover as many scenarios during training as possible, but achieving meaningful coverage is impossible. This naturally leads to the following question: is it feasible to flag out-of-distribution (OOD) samples without causing too many false alarms? Such an OOD detector must also be computationally efficient, because OOD detectors often execute as frequently as the sensors are sampled. Our aim in this article is to build an effective anomaly detector. To this end, we propose the idea of a memory bank that caches data samples representative enough to cover most of the in-distribution data. The similarity of a test input to such samples can then serve as a measure of its familiarity.
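The memory-bank familiarity idea can be sketched as follows. This is a hypothetical minimal version that uses Euclidean distance on feature vectors; the paper's actual distance functions are tailored per sensor type, and all names here are illustrative.

```python
import numpy as np

class MemoryBank:
    """Cache representative in-distribution samples and score test
    inputs by distance to the nearest cached sample (an illustrative
    sketch; the real distance function is sensor-specific)."""

    def __init__(self, distance=None):
        self.samples = []
        # Default: Euclidean distance on flattened feature vectors.
        self.distance = distance or (lambda a, b: float(np.linalg.norm(a - b)))

    def add(self, sample):
        """Cache a representative in-distribution sample."""
        self.samples.append(np.asarray(sample, dtype=float))

    def familiarity(self, query):
        """Distance to the nearest cached sample; smaller = more familiar."""
        q = np.asarray(query, dtype=float)
        return min(self.distance(q, s) for s in self.samples)

    def nearest(self, query):
        """Interpretable feedback: the cached sample the query most resembles."""
        q = np.asarray(query, dtype=float)
        return min(self.samples, key=lambda s: self.distance(q, s))
```

Returning the nearest cached sample is what makes the feedback interpretable: a designer can inspect which in-distribution scenario the test input was judged against.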
This is made possible by an appropriate choice of distance function tailored to the type of sensor we are interested in. Additionally, we adapt the conformal anomaly detection framework to capture distribution shifts with a guaranteed false alarm rate. We report the performance of our technique on two challenging scenarios: a self-driving car setting implemented inside the CARLA simulator with image inputs, and an autonomous racing-car navigation setting with LiDAR inputs. The experiments make clear that a deviation from the in-distribution setting can potentially lead to unsafe behavior. It should be noted that not all OOD inputs lead to precarious situations in practice, but staying in-distribution is akin to staying within a safety bubble of predictable behavior. An added benefit of our memory-based approach is that the OOD detector produces interpretable feedback for a human designer. This is of utmost importance since it also suggests a potential fix for the situation. In other competing approaches, such feedback is difficult to obtain due to reliance on techniques that use variational autoencoders.
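The conformal piece admits a compact sketch: given nonconformity scores (e.g., the familiarity distances) computed on a held-out calibration set, a test input's p-value is the fraction of calibration scores at least as extreme, and flagging when the p-value falls below a threshold alpha bounds the false alarm rate at alpha under exchangeability. The function below is an illustrative inductive-conformal sketch, not the paper's exact procedure.

```python
import numpy as np

def conformal_p_value(calib_scores, test_score):
    """Inductive conformal p-value for a test nonconformity score.

    Returns the smoothed fraction of calibration scores that are at
    least as large as the test score. Under exchangeability, the rule
    "flag if p <= alpha" yields a false alarm rate of at most alpha.
    """
    calib = np.asarray(calib_scores, dtype=float)
    return (np.sum(calib >= test_score) + 1) / (len(calib) + 1)
```

A large familiarity distance thus maps to a small p-value, which is the signal to flag the input as out-of-distribution.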